OpenStack Kilo : Cinder Storage(GlusterFS)
2015/06/20
It's possible to use the virtual storage provided by Cinder if an Instance needs more disks.
This example configures virtual storage with a GlusterFS backend.
                                    +------------------+         +------------------+
                           10.0.0.50| [ Storage Node ] |10.0.0.61|                  |
+------------------+          +-----+  Cinder-Volume   |   +-----+   GlusterFS #1   |
| [ Control Node ] |          | eth0|                  |   | eth0|                  |
|     Keystone     |10.0.0.30 |     +------------------+   |     +------------------+
|      Glance      |----------+----------------------------+
|     Nova API     |eth0      |     +------------------+   |     +------------------+
|    Cinder API    |          | eth0| [ Compute Node ] |   | eth0|                  |
+------------------+          +-----+  Nova Compute    |   +-----+   GlusterFS #2   |
                           10.0.0.51|                  |10.0.0.62|                  |
                                    +------------------+         +------------------+
[1] A GlusterFS server must be running on your LAN.
    This example uses a replicated volume "vol_replica" provided by "gfs01" and "gfs02".
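For reference, a 2-way replicated volume like "vol_replica" could have been created on the GlusterFS servers roughly like this. This is a sketch only; the brick directory /glusterfs/replica is an assumed path, not taken from this page.

```shell
# on gfs01 : peer with gfs02, then create and start a replicated volume
# (the brick path /glusterfs/replica is hypothetical)
[root@gfs01 ~]# gluster peer probe gfs02.srv.world
[root@gfs01 ~]# gluster volume create vol_replica replica 2 \
gfs01.srv.world:/glusterfs/replica gfs02.srv.world:/glusterfs/replica
[root@gfs01 ~]# gluster volume start vol_replica
[root@gfs01 ~]# gluster volume info vol_replica
```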
[2] Configure the Storage Node.
# install from EPEL
[root@storage ~]# yum --enablerepo=epel -y install glusterfs glusterfs-fuse
[root@storage ~]# vi /etc/cinder/cinder.conf
# add the following in the [DEFAULT] section
enabled_backends = glusterfs

# add the following to the end
[glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = $state_path/mnt

[root@storage ~]# vi /etc/cinder/glusterfs_shares
# create new : specify GlusterFS volumes
gfs01.srv.world:/vol_replica

[root@storage ~]# chmod 640 /etc/cinder/glusterfs_shares
[root@storage ~]# chgrp cinder /etc/cinder/glusterfs_shares
[root@storage ~]# systemctl restart openstack-cinder-volume
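Each line of the shares file names one GlusterFS volume as "<host>:/<volume>", optionally followed by mount options that are handed to the fuse client. For example, letting the client fail over to the second replica server might look like the line below; treat it as a sketch, since the option name depends on your glusterfs-fuse version (backup-volfile-servers is the newer spelling).

```shell
# /etc/cinder/glusterfs_shares : "<host>:/<volume> [mount options]" per line
gfs01.srv.world:/vol_replica -o backup-volfile-servers=gfs02.srv.world
```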
[3] Configure the Compute Node so that it can mount GlusterFS volumes.
# install from EPEL
[root@node01 ~]# yum --enablerepo=epel -y install glusterfs glusterfs-fuse
[root@node01 ~]# vi /etc/nova/nova.conf
# add the following in the [DEFAULT] section
osapi_volume_listen=0.0.0.0
volume_api_class=nova.volume.cinder.API

[root@node01 ~]# systemctl restart openstack-nova-compute
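Before relying on the attach path, it can be worth confirming by hand that the Compute Node's GlusterFS client can reach the volume. A sketch; the mount point /mnt is arbitrary:

```shell
# mount and unmount the volume once to confirm connectivity
[root@node01 ~]# mount -t glusterfs gfs01.srv.world:/vol_replica /mnt
[root@node01 ~]# df -hT /mnt
[root@node01 ~]# umount /mnt
```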
[4] For example, create a 10GB virtual disk named "disk01". This works from any node. (This example runs on the Control Node.)
# set environment variable first
[root@dlp ~(keystone)]# echo "export OS_VOLUME_API_VERSION=2" >> ~/keystonerc
[root@dlp ~(keystone)]# source ~/keystonerc
[root@dlp ~(keystone)]# cinder create --display_name disk01 10
+---------------------------------------+--------------------------------------+
|                Property               |                Value                 |
+---------------------------------------+--------------------------------------+
|              attachments              |                  []                  |
|           availability_zone           |                 nova                 |
|                bootable               |                false                 |
|          consistencygroup_id          |                 None                 |
|               created_at              |      2015-06-20T17:39:39.000000      |
|              description              |                 None                 |
|               encrypted               |                False                 |
|                   id                  | 6d712520-5c99-4d59-8a12-c335c373f430 |
|                metadata               |                  {}                  |
|              multiattach              |                False                 |
|                  name                 |                disk01                |
|         os-vol-host-attr:host         |                 None                 |
|     os-vol-mig-status-attr:migstat    |                 None                 |
|     os-vol-mig-status-attr:name_id    |                 None                 |
|      os-vol-tenant-attr:tenant_id     |   98ea1b896d3a48438922c0dfa9f6bc52   |
|   os-volume-replication:driver_data   |                 None                 |
| os-volume-replication:extended_status |                 None                 |
|           replication_status          |               disabled               |
|                  size                 |                  10                  |
|              snapshot_id              |                 None                 |
|              source_volid             |                 None                 |
|                 status                |               creating               |
|                user_id                |   704a7f5cf84a479796e10f47c30bb629   |
|              volume_type              |                 None                 |
+---------------------------------------+--------------------------------------+

[root@dlp ~(keystone)]# cinder list
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
|                  ID                  |   Status  |  Name  | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
| 6d712520-5c99-4d59-8a12-c335c373f430 | available | disk01 |  10  |     None    |  false   |             |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
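With the GlusterFS driver, each Cinder volume is stored as a flat file named "volume-<ID>" under the mount point base on the Storage Node. The check below is a sketch; the hashed share directory name under /var/lib/cinder/mnt (the default $state_path is /var/lib/cinder) will differ on your system.

```shell
# the backing file lives under $state_path/mnt/<hash of the share> on the Storage Node
[root@storage ~]# ls -lh /var/lib/cinder/mnt/*/volume-6d712520-5c99-4d59-8a12-c335c373f430
```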
[5] Attach the virtual disk to an Instance. In the example below, the disk is attached as "/dev/vdb". Create a file system on it to use it as storage.
[root@dlp ~(keystone)]# nova list
+-----------+----------+---------+------------+-------------+-----------------------------------+
| ID        | Name     | Status  | Task State | Power State | Networks                          |
+-----------+----------+---------+------------+-------------+-----------------------------------+
| 808634ec- | CentOS_7 | SHUTOFF | -          | Shutdown    | int_net=192.168.100.3, 10.0.0.201 |
+-----------+----------+---------+------------+-------------+-----------------------------------+

[root@dlp ~(keystone)]# nova volume-attach CentOS_7 6d712520-5c99-4d59-8a12-c335c373f430 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 6d712520-5c99-4d59-8a12-c335c373f430 |
| serverId | 2c7a1025-30d6-446a-a4ff-309347b64eca |
| volumeId | 6d712520-5c99-4d59-8a12-c335c373f430 |
+----------+--------------------------------------+

# the status of the attached disk turns to "in-use" as follows
[root@dlp ~(keystone)]# cinder list
+--------------------------+--------+--------+------+-------------+----------+---------------------------+
| ID                       | Status | Name   | Size | Volume Type | Bootable | Attached to               |
+--------------------------+--------+--------+------+-------------+----------+---------------------------+
| 6d712520-5c99-4d59-8a12- | in-use | disk01 | 10   | None        | false    | 2c7a1025-30d6-446a-a4ff-3 |
+--------------------------+--------+--------+------+-------------+----------+---------------------------+
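Inside the instance, the new disk can then be formatted and mounted like any local disk. A sketch; run these commands in the guest (the prompt hostname is hypothetical), and double-check the device name before formatting:

```shell
# on the instance : create a filesystem on the new disk and mount it
[root@centos7 ~]# mkfs.xfs /dev/vdb
[root@centos7 ~]# mount /dev/vdb /mnt
[root@centos7 ~]# df -hT /mnt
```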